Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before moving on to CelebA. Running the GAN on MNIST lets you see how well your model trains much sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [1]:
#data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change how many examples are displayed by changing show_n_images.

In [2]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[2]:
<matplotlib.image.AxesImage at 0x7f8bfa6accc0>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change how many examples are displayed by changing show_n_images.

In [3]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Out[3]:
<matplotlib.image.AxesImage at 0x7f8bfa59fda0>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. The values of the MNIST and CelebA images will be scaled to the range -0.5 to 0.5, and all images will be 28x28. The CelebA images will first be cropped to remove the parts of each image that don't include a face, then resized down to 28x28.

The MNIST images are grayscale with a single color channel, while the CelebA images have 3 color channels (RGB).
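
For intuition, here is a minimal sketch of the kind of per-image transform the helper applies; the exact crop box and resampling filter are assumptions for illustration, and the real logic lives in helper.py:

import numpy as np
from PIL import Image

def preprocess_face(path):
    # Crop roughly to the face, resize to 28x28, scale pixels to [-0.5, 0.5]
    image = Image.open(path)
    image = image.crop((25, 65, 153, 193))   # assumed crop for 178x218 CelebA images
    image = image.resize((28, 28), Image.BILINEAR)
    return np.array(image) / 255.0 - 0.5     # uint8 [0, 255] -> float [-0.5, 0.5]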

Build the Neural Network

You'll build the components necessary to train a GAN by implementing the following functions below:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.0.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the tuple (tensor of real input images, tensor of z data, learning rate).

In [5]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # Placeholders for the real images, the z input, and the learning rate
    input_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    input_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')

    lr = tf.placeholder(tf.float32, name='learning_rate')

    return input_real, input_z, lr


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed
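
As a quick usage sketch (the sizes here are arbitrary, chosen just for illustration):

with tf.Graph().as_default():
    input_real, input_z, lr = model_inputs(28, 28, 3, 100)
    print(input_real.get_shape().as_list())  # [None, 28, 28, 3]
    print(input_z.get_shape().as_list())     # [None, 100]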

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).

In [6]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # Leaky ReLU slope
    alpha = 0.2
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 28x28ximage_channels
        x1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='valid')
        relu1 = tf.maximum(alpha * x1, x1)
        # 12x12x64
        x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
        bn2 = tf.layers.batch_normalization(x2, training=True)
        relu2 = tf.maximum(alpha * bn2, bn2)
        # 6x6x128
        x3 = tf.layers.conv2d(relu2, 256, 5, strides=1, padding='same')
        bn3 = tf.layers.batch_normalization(x3, training=True)
        relu3 = tf.maximum(alpha * bn3, bn3)
        # 6x6x256
        # Flatten it
        flat = tf.reshape(relu3, (-1, 6*6*256))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
        return out, logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed
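
To see what reuse=True buys you, here's a small sketch (not part of the project tests) that calls the discriminator twice and confirms both calls share a single set of variables:

with tf.Graph().as_default():
    images = tf.placeholder(tf.float32, (None, 28, 28, 3))
    _, logits_real = discriminator(images)              # first call creates the variables
    _, logits_fake = discriminator(images, reuse=True)  # second call reuses them
    d_vars = [v for v in tf.trainable_variables() if v.name.startswith('discriminator')]
    print(len(d_vars))  # one set of weights, however many times the network is applied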

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [7]:
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # Leaky ReLU slope and a multiplier for the layer depths
    alpha = 0.2
    channel_factor = 4
    with tf.variable_scope('generator', reuse=not is_train):
        # First fully connected layer
        x1 = tf.layers.dense(z, 7*7*channel_factor*100)
        # Reshape it to start the convolutional stack
        x1 = tf.reshape(x1, (-1, 7, 7, channel_factor*100))
        x1 = tf.layers.batch_normalization(x1, training=is_train)
        x1 = tf.maximum(alpha * x1, x1)
        # 7x7x400
        x2 = tf.layers.conv2d_transpose(x1, channel_factor*50, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=is_train)
        x2 = tf.maximum(alpha * x2, x2)
        # 14x14x200
        x3 = tf.layers.conv2d_transpose(x2, channel_factor*25, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=is_train)
        x3 = tf.maximum(alpha * x3, x3)
        # 28x28x100
        x4 = tf.layers.conv2d(x3, channel_factor*8, 5, strides=1, padding='same')
        x4 = tf.layers.batch_normalization(x4, training=is_train)
        x4 = tf.maximum(alpha * x4, x4)
        # 28x28x32
        # Output layer
        logits = tf.layers.conv2d(x4, out_channel_dim, 5, strides=1, padding='same')
        # 28x28xout_channel_dim

        return tf.tanh(logits)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed
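
As a quick sanity check (a sketch, separate from the unit tests), you can confirm the generator maps a z batch to 28 x 28 x out_channel_dim images:

with tf.Graph().as_default():
    z = tf.placeholder(tf.float32, (None, 100))
    fake_images = generator(z, out_channel_dim=3)
    print(fake_images.get_shape().as_list())  # expected: [None, 28, 28, 3]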

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented (the losses are summarized in equation form after the list):

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)
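
For reference, the implementation below uses sigmoid cross-entropy on the discriminator logits with one-sided label smoothing (smooth = 0.1), a common stabilization trick. Writing $\mathrm{CE}(\ell, y)$ for the mean sigmoid cross-entropy between logits $\ell$ and labels $y$ (the mean is tf.reduce_mean over the batch), the losses being computed are:

$$\mathcal{L}_D = \mathrm{CE}\big(D(x),\ 1-\text{smooth}\big) + \mathrm{CE}\big(D(G(z)),\ 0\big), \qquad \mathcal{L}_G = \mathrm{CE}\big(D(G(z)),\ 1\big)$$
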
In [8]:
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # One-sided label smoothing on the real labels helps stabilize training
    smooth = 0.1
    g_out = generator(input_z, out_channel_dim=out_channel_dim)
    d_out, d_real_logits = discriminator(input_real)
    d_z, d_z_logits = discriminator(g_out, True)
    d_real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_real_logits, labels=tf.ones_like(d_out)*(1-smooth)))
    d_fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_z_logits, labels=tf.zeros_like(d_z)))
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_z_logits, labels=tf.ones_like(d_z)))

    d_loss = d_real_loss + d_fake_loss                                                   

    return d_loss, g_loss

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables. Filter the variables by the "discriminator" and "generator" scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [9]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # Split the trainable variables by scope so each optimizer updates only its own network
    train_vars = tf.trainable_variables()
    d_vars = [v for v in train_vars if v.name.startswith('discriminator')]
    g_vars = [v for v in train_vars if v.name.startswith('generator')]
    # Run the batch normalization update ops before each training step
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_opt = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1).minimize(loss=d_loss, var_list=d_vars)
        g_opt = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1).minimize(loss=g_loss, var_list=g_vars)    
    return d_opt, g_opt

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [10]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GAN. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show the generator's output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.

In [11]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # Build the model
    out_channel_dim = data_shape[3]
    input_real, input_z, lr = model_inputs(data_shape[1], data_shape[2], out_channel_dim, z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, out_channel_dim)
    d_opt, g_opt = model_opt(d_loss, g_loss, lr, beta1)  # use the learning rate placeholder fed each step
    i_show = 0
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                # Train the model
                # Rescale images from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
                batch_images = batch_images * 2
                z_images = np.random.uniform(-1, 1, size=(batch_size, z_dim))
                feed_data = {input_real: batch_images, input_z: z_images, lr: learning_rate}
                # Run the generator optimizer twice per discriminator update to keep the two networks balanced
                sess.run(g_opt, feed_dict=feed_data)
                sess.run(g_opt, feed_dict=feed_data)
                sess.run(d_opt, feed_dict=feed_data)
                
                if i_show % 20 == 0:
                    dis_loss = sess.run(d_loss, feed_dict=feed_data)
                    gen_loss = sess.run(g_loss, feed_dict=feed_data)
                    print('Epoch %s/%s, Step %s, dis_loss: %s, gen_loss %s' % (epoch_i+1, epoch_count, i_show, dis_loss, gen_loss))
                if i_show % 100 == 0:
                    show_generator_output(sess, 64, input_z, out_channel_dim, data_image_mode)
                i_show += 1
                

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator, or close to 0.

CelebA

Run your GAN on CelebA. It will take around 20 minutes on an average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.

In [12]:
batch_size = 32
z_dim = 100
l_rate = 0.0005
beta1 = 0.5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, l_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)
Epoch 1/2, Step 0, dis_loss: 0.394276, gen_loss 12.3164
Epoch 1/2, Step 20, dis_loss: 1.98273, gen_loss 5.63655
Epoch 1/2, Step 40, dis_loss: 1.45363, gen_loss 0.833335
Epoch 1/2, Step 60, dis_loss: 1.40562, gen_loss 0.994229
Epoch 1/2, Step 80, dis_loss: 1.35689, gen_loss 1.02191
Epoch 1/2, Step 100, dis_loss: 1.56469, gen_loss 1.53884
Epoch 1/2, Step 120, dis_loss: 1.34807, gen_loss 1.14543
Epoch 1/2, Step 140, dis_loss: 1.3536, gen_loss 1.04779
Epoch 1/2, Step 160, dis_loss: 1.29729, gen_loss 0.849144
Epoch 1/2, Step 180, dis_loss: 1.37718, gen_loss 0.626941
Epoch 1/2, Step 200, dis_loss: 1.48213, gen_loss 0.541951
Epoch 1/2, Step 220, dis_loss: 1.40068, gen_loss 1.47573
Epoch 1/2, Step 240, dis_loss: 1.3928, gen_loss 1.35989
Epoch 1/2, Step 260, dis_loss: 1.32449, gen_loss 0.920786
Epoch 1/2, Step 280, dis_loss: 1.37801, gen_loss 1.10411
Epoch 1/2, Step 300, dis_loss: 1.48483, gen_loss 1.26635
Epoch 1/2, Step 320, dis_loss: 1.32293, gen_loss 1.03646
Epoch 1/2, Step 340, dis_loss: 1.34024, gen_loss 0.795837
Epoch 1/2, Step 360, dis_loss: 1.30892, gen_loss 0.793689
Epoch 1/2, Step 380, dis_loss: 1.33615, gen_loss 0.7459
Epoch 1/2, Step 400, dis_loss: 1.36582, gen_loss 0.840885
Epoch 1/2, Step 420, dis_loss: 1.2663, gen_loss 0.883857
Epoch 1/2, Step 440, dis_loss: 1.39356, gen_loss 0.642088
Epoch 1/2, Step 460, dis_loss: 1.36881, gen_loss 0.614022
Epoch 1/2, Step 480, dis_loss: 1.40583, gen_loss 1.1519
Epoch 1/2, Step 500, dis_loss: 1.32761, gen_loss 1.0873
Epoch 1/2, Step 520, dis_loss: 1.39634, gen_loss 1.26426
Epoch 1/2, Step 540, dis_loss: 1.35122, gen_loss 0.670133
Epoch 1/2, Step 560, dis_loss: 1.35432, gen_loss 1.09947
Epoch 1/2, Step 580, dis_loss: 1.43308, gen_loss 0.620769
Epoch 1/2, Step 600, dis_loss: 1.38964, gen_loss 0.871076
Epoch 1/2, Step 620, dis_loss: 1.36687, gen_loss 0.912133
Epoch 1/2, Step 640, dis_loss: 1.31625, gen_loss 0.967407
Epoch 1/2, Step 660, dis_loss: 1.36163, gen_loss 0.87153
Epoch 1/2, Step 680, dis_loss: 1.49511, gen_loss 0.487198
Epoch 1/2, Step 700, dis_loss: 1.39227, gen_loss 0.569244
Epoch 1/2, Step 720, dis_loss: 1.28841, gen_loss 1.05553
Epoch 1/2, Step 740, dis_loss: 1.44925, gen_loss 0.524004
Epoch 1/2, Step 760, dis_loss: 1.35502, gen_loss 0.842477
Epoch 1/2, Step 780, dis_loss: 1.3445, gen_loss 0.717599
Epoch 1/2, Step 800, dis_loss: 1.38201, gen_loss 0.678613
Epoch 1/2, Step 820, dis_loss: 1.41162, gen_loss 0.93383
Epoch 1/2, Step 840, dis_loss: 1.38298, gen_loss 0.633429
Epoch 1/2, Step 860, dis_loss: 1.35849, gen_loss 0.792673
Epoch 1/2, Step 880, dis_loss: 1.39187, gen_loss 0.602735
Epoch 1/2, Step 900, dis_loss: 1.44755, gen_loss 1.2746
Epoch 1/2, Step 920, dis_loss: 1.36564, gen_loss 0.777609
Epoch 1/2, Step 940, dis_loss: 1.41201, gen_loss 1.10108
Epoch 1/2, Step 960, dis_loss: 1.36943, gen_loss 0.777649
Epoch 1/2, Step 980, dis_loss: 1.31941, gen_loss 0.703468
Epoch 1/2, Step 1000, dis_loss: 1.33852, gen_loss 0.886544
Epoch 1/2, Step 1020, dis_loss: 1.40845, gen_loss 0.942452
Epoch 1/2, Step 1040, dis_loss: 1.38958, gen_loss 1.12156
Epoch 1/2, Step 1060, dis_loss: 1.33962, gen_loss 0.970782
Epoch 1/2, Step 1080, dis_loss: 1.32968, gen_loss 0.739194
Epoch 1/2, Step 1100, dis_loss: 1.35816, gen_loss 1.04737
Epoch 1/2, Step 1120, dis_loss: 1.34032, gen_loss 0.959886
Epoch 1/2, Step 1140, dis_loss: 1.3768, gen_loss 0.700669
Epoch 1/2, Step 1160, dis_loss: 1.34242, gen_loss 0.792911
Epoch 1/2, Step 1180, dis_loss: 1.34579, gen_loss 0.718619
Epoch 1/2, Step 1200, dis_loss: 1.35514, gen_loss 0.854638
Epoch 1/2, Step 1220, dis_loss: 1.38851, gen_loss 1.13222
Epoch 1/2, Step 1240, dis_loss: 1.31588, gen_loss 0.721941
Epoch 1/2, Step 1260, dis_loss: 1.36543, gen_loss 0.841855
Epoch 1/2, Step 1280, dis_loss: 1.36099, gen_loss 0.86363
Epoch 1/2, Step 1300, dis_loss: 1.33971, gen_loss 1.04114
Epoch 1/2, Step 1320, dis_loss: 1.34875, gen_loss 1.07799
Epoch 1/2, Step 1340, dis_loss: 1.38705, gen_loss 0.647773
Epoch 1/2, Step 1360, dis_loss: 1.38481, gen_loss 1.01168
Epoch 1/2, Step 1380, dis_loss: 1.37337, gen_loss 1.01732
Epoch 1/2, Step 1400, dis_loss: 1.35029, gen_loss 0.829197
Epoch 1/2, Step 1420, dis_loss: 1.40334, gen_loss 1.06349
Epoch 1/2, Step 1440, dis_loss: 1.39238, gen_loss 0.630357
Epoch 1/2, Step 1460, dis_loss: 1.34143, gen_loss 0.842959
Epoch 1/2, Step 1480, dis_loss: 1.40791, gen_loss 0.675225
Epoch 1/2, Step 1500, dis_loss: 1.36347, gen_loss 0.75765
Epoch 1/2, Step 1520, dis_loss: 1.38088, gen_loss 0.62051
Epoch 1/2, Step 1540, dis_loss: 1.31664, gen_loss 0.839878
Epoch 1/2, Step 1560, dis_loss: 1.45934, gen_loss 1.23388
Epoch 1/2, Step 1580, dis_loss: 1.37595, gen_loss 1.01796
Epoch 1/2, Step 1600, dis_loss: 1.41372, gen_loss 1.02732
Epoch 1/2, Step 1620, dis_loss: 1.38, gen_loss 0.6702
Epoch 1/2, Step 1640, dis_loss: 1.36357, gen_loss 0.93237
Epoch 1/2, Step 1660, dis_loss: 1.44617, gen_loss 0.529897
Epoch 1/2, Step 1680, dis_loss: 1.37559, gen_loss 0.681032
Epoch 1/2, Step 1700, dis_loss: 1.41131, gen_loss 0.588476
Epoch 1/2, Step 1720, dis_loss: 1.36695, gen_loss 0.86042
Epoch 1/2, Step 1740, dis_loss: 1.36294, gen_loss 0.676154
Epoch 1/2, Step 1760, dis_loss: 1.35306, gen_loss 0.86801
Epoch 1/2, Step 1780, dis_loss: 1.3746, gen_loss 0.859091
Epoch 1/2, Step 1800, dis_loss: 1.36426, gen_loss 0.9239
Epoch 1/2, Step 1820, dis_loss: 1.36102, gen_loss 0.968106
Epoch 1/2, Step 1840, dis_loss: 1.3611, gen_loss 1.04167
Epoch 1/2, Step 1860, dis_loss: 1.37283, gen_loss 0.831963
Epoch 2/2, Step 1880, dis_loss: 1.39396, gen_loss 1.12086
Epoch 2/2, Step 1900, dis_loss: 1.34289, gen_loss 0.894674
Epoch 2/2, Step 1920, dis_loss: 1.38919, gen_loss 0.632415
Epoch 2/2, Step 1940, dis_loss: 1.39507, gen_loss 0.61686
Epoch 2/2, Step 1960, dis_loss: 1.36048, gen_loss 0.77094
Epoch 2/2, Step 1980, dis_loss: 1.37374, gen_loss 0.828671
Epoch 2/2, Step 2000, dis_loss: 1.37162, gen_loss 1.02429
Epoch 2/2, Step 2020, dis_loss: 1.39497, gen_loss 1.05954
Epoch 2/2, Step 2040, dis_loss: 1.36876, gen_loss 1.01953
Epoch 2/2, Step 2060, dis_loss: 1.39395, gen_loss 1.08596
Epoch 2/2, Step 2080, dis_loss: 1.3719, gen_loss 0.802356
Epoch 2/2, Step 2100, dis_loss: 1.36808, gen_loss 0.76675
Epoch 2/2, Step 2120, dis_loss: 1.37457, gen_loss 0.753057
Epoch 2/2, Step 2140, dis_loss: 1.46361, gen_loss 0.515651
Epoch 2/2, Step 2160, dis_loss: 1.36821, gen_loss 0.92058
Epoch 2/2, Step 2180, dis_loss: 1.36809, gen_loss 0.985007
Epoch 2/2, Step 2200, dis_loss: 1.38696, gen_loss 0.914312
Epoch 2/2, Step 2220, dis_loss: 1.36223, gen_loss 0.749363
Epoch 2/2, Step 2240, dis_loss: 1.41713, gen_loss 0.585656
Epoch 2/2, Step 2260, dis_loss: 1.42631, gen_loss 1.17234
Epoch 2/2, Step 2280, dis_loss: 1.43531, gen_loss 1.14883
Epoch 2/2, Step 2300, dis_loss: 1.39139, gen_loss 0.937589
Epoch 2/2, Step 2320, dis_loss: 1.36189, gen_loss 1.0005
Epoch 2/2, Step 2340, dis_loss: 1.39093, gen_loss 0.99528
Epoch 2/2, Step 2360, dis_loss: 1.3515, gen_loss 0.916985
Epoch 2/2, Step 2380, dis_loss: 1.40995, gen_loss 1.08524
Epoch 2/2, Step 2400, dis_loss: 1.35819, gen_loss 0.968441
Epoch 2/2, Step 2420, dis_loss: 1.35761, gen_loss 0.769074
Epoch 2/2, Step 2440, dis_loss: 1.36587, gen_loss 0.720871
Epoch 2/2, Step 2460, dis_loss: 1.36362, gen_loss 0.748415
Epoch 2/2, Step 2480, dis_loss: 1.37817, gen_loss 0.973181
Epoch 2/2, Step 2500, dis_loss: 1.3607, gen_loss 0.941353
Epoch 2/2, Step 2520, dis_loss: 1.38058, gen_loss 0.944505
Epoch 2/2, Step 2540, dis_loss: 1.3816, gen_loss 0.979178
Epoch 2/2, Step 2560, dis_loss: 1.40382, gen_loss 1.04668
Epoch 2/2, Step 2580, dis_loss: 1.37246, gen_loss 0.775293
Epoch 2/2, Step 2600, dis_loss: 1.36013, gen_loss 0.754758
Epoch 2/2, Step 2620, dis_loss: 1.41636, gen_loss 1.11631
Epoch 2/2, Step 2640, dis_loss: 1.40823, gen_loss 1.08085
Epoch 2/2, Step 2660, dis_loss: 1.3728, gen_loss 0.875964
Epoch 2/2, Step 2680, dis_loss: 1.3719, gen_loss 0.909287
Epoch 2/2, Step 2700, dis_loss: 1.37034, gen_loss 0.868568
Epoch 2/2, Step 2720, dis_loss: 1.38018, gen_loss 0.937084
Epoch 2/2, Step 2740, dis_loss: 1.3692, gen_loss 0.836709
Epoch 2/2, Step 2760, dis_loss: 1.42065, gen_loss 1.12305
Epoch 2/2, Step 2780, dis_loss: 1.39542, gen_loss 0.64951
Epoch 2/2, Step 2800, dis_loss: 1.44428, gen_loss 0.554465
Epoch 2/2, Step 2820, dis_loss: 1.36338, gen_loss 0.874474
Epoch 2/2, Step 2840, dis_loss: 1.4286, gen_loss 1.14266
Epoch 2/2, Step 2860, dis_loss: 1.37556, gen_loss 0.715116
Epoch 2/2, Step 2880, dis_loss: 1.38625, gen_loss 0.759691
Epoch 2/2, Step 2900, dis_loss: 1.40397, gen_loss 0.679837
Epoch 2/2, Step 2920, dis_loss: 1.38741, gen_loss 0.633222
Epoch 2/2, Step 2940, dis_loss: 1.35808, gen_loss 0.810938
Epoch 2/2, Step 2960, dis_loss: 1.39938, gen_loss 0.653603
Epoch 2/2, Step 2980, dis_loss: 1.37404, gen_loss 0.736032
Epoch 2/2, Step 3000, dis_loss: 1.392, gen_loss 0.740345
Epoch 2/2, Step 3020, dis_loss: 1.4088, gen_loss 1.02645
Epoch 2/2, Step 3040, dis_loss: 1.39029, gen_loss 0.908667
Epoch 2/2, Step 3060, dis_loss: 1.38423, gen_loss 0.903168
Epoch 2/2, Step 3080, dis_loss: 1.35954, gen_loss 0.751668
Epoch 2/2, Step 3100, dis_loss: 1.38053, gen_loss 1.02194
Epoch 2/2, Step 3120, dis_loss: 1.36396, gen_loss 0.87676
Epoch 2/2, Step 3140, dis_loss: 1.37091, gen_loss 0.917542
Epoch 2/2, Step 3160, dis_loss: 1.41806, gen_loss 1.09396
Epoch 2/2, Step 3180, dis_loss: 1.41095, gen_loss 1.02205
Epoch 2/2, Step 3200, dis_loss: 1.38533, gen_loss 0.944473
Epoch 2/2, Step 3220, dis_loss: 1.39331, gen_loss 0.989685
Epoch 2/2, Step 3240, dis_loss: 1.36536, gen_loss 0.906974
Epoch 2/2, Step 3260, dis_loss: 1.35979, gen_loss 0.794687
Epoch 2/2, Step 3280, dis_loss: 1.41285, gen_loss 0.600637
Epoch 2/2, Step 3300, dis_loss: 1.36899, gen_loss 0.706444
Epoch 2/2, Step 3320, dis_loss: 1.36736, gen_loss 0.761958
Epoch 2/2, Step 3340, dis_loss: 1.39184, gen_loss 0.644273
Epoch 2/2, Step 3360, dis_loss: 1.36419, gen_loss 0.730589
Epoch 2/2, Step 3380, dis_loss: 1.36626, gen_loss 0.803806
Epoch 2/2, Step 3400, dis_loss: 1.36646, gen_loss 0.722968
Epoch 2/2, Step 3420, dis_loss: 1.37944, gen_loss 0.669597
Epoch 2/2, Step 3440, dis_loss: 1.39396, gen_loss 0.634068
Epoch 2/2, Step 3460, dis_loss: 1.38134, gen_loss 0.705873
Epoch 2/2, Step 3480, dis_loss: 1.39313, gen_loss 0.639149
Epoch 2/2, Step 3500, dis_loss: 1.37689, gen_loss 0.852054
Epoch 2/2, Step 3520, dis_loss: 1.39174, gen_loss 1.00363
Epoch 2/2, Step 3540, dis_loss: 1.39124, gen_loss 1.00928
Epoch 2/2, Step 3560, dis_loss: 1.41477, gen_loss 1.05174
Epoch 2/2, Step 3580, dis_loss: 1.38121, gen_loss 1.0027
Epoch 2/2, Step 3600, dis_loss: 1.37822, gen_loss 0.881959
Epoch 2/2, Step 3620, dis_loss: 1.39582, gen_loss 1.01695
Epoch 2/2, Step 3640, dis_loss: 1.37646, gen_loss 0.909659
Epoch 2/2, Step 3660, dis_loss: 1.39563, gen_loss 1.0058
Epoch 2/2, Step 3680, dis_loss: 1.39593, gen_loss 0.973959
Epoch 2/2, Step 3700, dis_loss: 1.37341, gen_loss 0.934756
Epoch 2/2, Step 3720, dis_loss: 1.39245, gen_loss 1.01963
Epoch 2/2, Step 3740, dis_loss: 1.36791, gen_loss 0.897893
In [13]:
batch_size = 32
z_dim = 200
l_rate = 0.0005
beta1 = 0.5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, l_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)
Epoch 1/1, Step 0, dis_loss: 0.471298, gen_loss 8.34484
Epoch 1/1, Step 20, dis_loss: 1.17619, gen_loss 1.21337
Epoch 1/1, Step 40, dis_loss: 1.51307, gen_loss 1.3403
Epoch 1/1, Step 60, dis_loss: 1.30363, gen_loss 0.891318
Epoch 1/1, Step 80, dis_loss: 1.3662, gen_loss 0.979895
Epoch 1/1, Step 100, dis_loss: 1.42067, gen_loss 1.18601
Epoch 1/1, Step 120, dis_loss: 1.42276, gen_loss 0.833744
Epoch 1/1, Step 140, dis_loss: 1.39004, gen_loss 0.664609
Epoch 1/1, Step 160, dis_loss: 1.37735, gen_loss 0.989474
Epoch 1/1, Step 180, dis_loss: 1.36501, gen_loss 0.855556
Epoch 1/1, Step 200, dis_loss: 1.37581, gen_loss 0.797447
Epoch 1/1, Step 220, dis_loss: 1.39581, gen_loss 0.695416
Epoch 1/1, Step 240, dis_loss: 1.39199, gen_loss 0.676235
Epoch 1/1, Step 260, dis_loss: 1.42231, gen_loss 1.02499
Epoch 1/1, Step 280, dis_loss: 1.37608, gen_loss 0.849482
Epoch 1/1, Step 300, dis_loss: 1.38751, gen_loss 0.794275
Epoch 1/1, Step 320, dis_loss: 1.26464, gen_loss 0.934412
Epoch 1/1, Step 340, dis_loss: 1.44708, gen_loss 0.834955
Epoch 1/1, Step 360, dis_loss: 1.35365, gen_loss 1.03456
Epoch 1/1, Step 380, dis_loss: 1.43369, gen_loss 0.671479
Epoch 1/1, Step 400, dis_loss: 1.43155, gen_loss 0.967315
Epoch 1/1, Step 420, dis_loss: 1.28646, gen_loss 1.18455
Epoch 1/1, Step 440, dis_loss: 1.40761, gen_loss 0.72004
Epoch 1/1, Step 460, dis_loss: 1.38699, gen_loss 0.724322
Epoch 1/1, Step 480, dis_loss: 1.37586, gen_loss 1.04803
Epoch 1/1, Step 500, dis_loss: 1.38425, gen_loss 0.719256
Epoch 1/1, Step 520, dis_loss: 1.39699, gen_loss 1.23353
Epoch 1/1, Step 540, dis_loss: 1.36768, gen_loss 0.74853
Epoch 1/1, Step 560, dis_loss: 1.39422, gen_loss 1.05367
Epoch 1/1, Step 580, dis_loss: 1.36138, gen_loss 0.77576
Epoch 1/1, Step 600, dis_loss: 1.33773, gen_loss 0.87228
Epoch 1/1, Step 620, dis_loss: 1.38083, gen_loss 0.830936
Epoch 1/1, Step 640, dis_loss: 1.38214, gen_loss 0.950737
Epoch 1/1, Step 660, dis_loss: 1.41111, gen_loss 0.844505
Epoch 1/1, Step 680, dis_loss: 1.41886, gen_loss 1.1396
Epoch 1/1, Step 700, dis_loss: 1.34776, gen_loss 0.841279
Epoch 1/1, Step 720, dis_loss: 1.34782, gen_loss 0.878994
Epoch 1/1, Step 740, dis_loss: 1.34335, gen_loss 1.2393
Epoch 1/1, Step 760, dis_loss: 1.37825, gen_loss 0.734452
Epoch 1/1, Step 780, dis_loss: 1.34211, gen_loss 0.822684
Epoch 1/1, Step 800, dis_loss: 1.34042, gen_loss 0.86
Epoch 1/1, Step 820, dis_loss: 1.34331, gen_loss 0.839463
Epoch 1/1, Step 840, dis_loss: 1.37341, gen_loss 0.747873
Epoch 1/1, Step 860, dis_loss: 1.40479, gen_loss 0.96092
Epoch 1/1, Step 880, dis_loss: 1.40222, gen_loss 0.698184
Epoch 1/1, Step 900, dis_loss: 1.39469, gen_loss 0.778466
Epoch 1/1, Step 920, dis_loss: 1.36663, gen_loss 0.900543
Epoch 1/1, Step 940, dis_loss: 1.35246, gen_loss 0.857098
Epoch 1/1, Step 960, dis_loss: 1.40496, gen_loss 0.90631
Epoch 1/1, Step 980, dis_loss: 1.3978, gen_loss 0.893496
Epoch 1/1, Step 1000, dis_loss: 1.37517, gen_loss 0.718288
Epoch 1/1, Step 1020, dis_loss: 1.36036, gen_loss 0.78524
Epoch 1/1, Step 1040, dis_loss: 1.38134, gen_loss 0.885488
Epoch 1/1, Step 1060, dis_loss: 1.38881, gen_loss 1.10361
Epoch 1/1, Step 1080, dis_loss: 1.38269, gen_loss 0.938368
Epoch 1/1, Step 1100, dis_loss: 1.37979, gen_loss 1.03396
Epoch 1/1, Step 1120, dis_loss: 1.36638, gen_loss 0.732656
Epoch 1/1, Step 1140, dis_loss: 1.36191, gen_loss 0.887001
Epoch 1/1, Step 1160, dis_loss: 1.35613, gen_loss 0.683944
Epoch 1/1, Step 1180, dis_loss: 1.38821, gen_loss 0.963448
Epoch 1/1, Step 1200, dis_loss: 1.39783, gen_loss 0.831165
Epoch 1/1, Step 1220, dis_loss: 1.3787, gen_loss 0.815536
Epoch 1/1, Step 1240, dis_loss: 1.38647, gen_loss 0.707814
Epoch 1/1, Step 1260, dis_loss: 1.36717, gen_loss 0.830231
Epoch 1/1, Step 1280, dis_loss: 1.38123, gen_loss 0.842218
Epoch 1/1, Step 1300, dis_loss: 1.39691, gen_loss 1.00168
Epoch 1/1, Step 1320, dis_loss: 1.40288, gen_loss 0.928013
Epoch 1/1, Step 1340, dis_loss: 1.38221, gen_loss 0.913905
Epoch 1/1, Step 1360, dis_loss: 1.38388, gen_loss 0.71137
Epoch 1/1, Step 1380, dis_loss: 1.3739, gen_loss 0.855644
Epoch 1/1, Step 1400, dis_loss: 1.39909, gen_loss 0.843025
Epoch 1/1, Step 1420, dis_loss: 1.37291, gen_loss 0.856426
Epoch 1/1, Step 1440, dis_loss: 1.37697, gen_loss 0.881055
Epoch 1/1, Step 1460, dis_loss: 1.39372, gen_loss 0.7586
Epoch 1/1, Step 1480, dis_loss: 1.36842, gen_loss 0.832199
Epoch 1/1, Step 1500, dis_loss: 1.38928, gen_loss 0.858057
Epoch 1/1, Step 1520, dis_loss: 1.3402, gen_loss 0.817449
Epoch 1/1, Step 1540, dis_loss: 1.38968, gen_loss 0.927027
Epoch 1/1, Step 1560, dis_loss: 1.35604, gen_loss 0.858294
Epoch 1/1, Step 1580, dis_loss: 1.36037, gen_loss 0.80446
Epoch 1/1, Step 1600, dis_loss: 1.3791, gen_loss 0.894276
Epoch 1/1, Step 1620, dis_loss: 1.38129, gen_loss 0.787012
Epoch 1/1, Step 1640, dis_loss: 1.39164, gen_loss 0.640686
Epoch 1/1, Step 1660, dis_loss: 1.41044, gen_loss 0.758151
Epoch 1/1, Step 1680, dis_loss: 1.37556, gen_loss 0.895675
Epoch 1/1, Step 1700, dis_loss: 1.39455, gen_loss 0.914489
Epoch 1/1, Step 1720, dis_loss: 1.35111, gen_loss 1.05601
Epoch 1/1, Step 1740, dis_loss: 1.36222, gen_loss 0.819451
Epoch 1/1, Step 1760, dis_loss: 1.382, gen_loss 1.16276
Epoch 1/1, Step 1780, dis_loss: 1.37822, gen_loss 0.801416
Epoch 1/1, Step 1800, dis_loss: 1.36645, gen_loss 0.846329
Epoch 1/1, Step 1820, dis_loss: 1.32435, gen_loss 0.841494
Epoch 1/1, Step 1840, dis_loss: 1.38955, gen_loss 0.776097
Epoch 1/1, Step 1860, dis_loss: 1.38285, gen_loss 0.882707
Epoch 1/1, Step 1880, dis_loss: 1.37806, gen_loss 0.835203
Epoch 1/1, Step 1900, dis_loss: 1.38203, gen_loss 0.837557
Epoch 1/1, Step 1920, dis_loss: 1.38623, gen_loss 0.870635
Epoch 1/1, Step 1940, dis_loss: 1.3909, gen_loss 0.775388
Epoch 1/1, Step 1960, dis_loss: 1.34633, gen_loss 0.887536
Epoch 1/1, Step 1980, dis_loss: 1.41387, gen_loss 0.759147
Epoch 1/1, Step 2000, dis_loss: 1.39532, gen_loss 1.01084
Epoch 1/1, Step 2020, dis_loss: 1.38112, gen_loss 0.757171
Epoch 1/1, Step 2040, dis_loss: 1.38764, gen_loss 0.832865
Epoch 1/1, Step 2060, dis_loss: 1.37915, gen_loss 0.890021
Epoch 1/1, Step 2080, dis_loss: 1.37861, gen_loss 0.781245
Epoch 1/1, Step 2100, dis_loss: 1.37447, gen_loss 0.783608
Epoch 1/1, Step 2120, dis_loss: 1.35544, gen_loss 0.745805
Epoch 1/1, Step 2140, dis_loss: 1.37914, gen_loss 0.792645
Epoch 1/1, Step 2160, dis_loss: 1.32967, gen_loss 0.846504
Epoch 1/1, Step 2180, dis_loss: 1.39182, gen_loss 0.713326
Epoch 1/1, Step 2200, dis_loss: 1.38221, gen_loss 0.824935
Epoch 1/1, Step 2220, dis_loss: 1.37597, gen_loss 0.760248
Epoch 1/1, Step 2240, dis_loss: 1.33715, gen_loss 0.852337
Epoch 1/1, Step 2260, dis_loss: 1.36379, gen_loss 0.885213
Epoch 1/1, Step 2280, dis_loss: 1.36419, gen_loss 0.774017
Epoch 1/1, Step 2300, dis_loss: 1.37614, gen_loss 0.787253
Epoch 1/1, Step 2320, dis_loss: 1.35781, gen_loss 0.825696
Epoch 1/1, Step 2340, dis_loss: 1.346, gen_loss 0.786497
Epoch 1/1, Step 2360, dis_loss: 1.39197, gen_loss 1.01082
Epoch 1/1, Step 2380, dis_loss: 1.35897, gen_loss 0.757214
Epoch 1/1, Step 2400, dis_loss: 1.35099, gen_loss 0.86381
Epoch 1/1, Step 2420, dis_loss: 1.3845, gen_loss 0.748033
Epoch 1/1, Step 2440, dis_loss: 1.37938, gen_loss 0.848193
Epoch 1/1, Step 2460, dis_loss: 1.3787, gen_loss 0.704039
Epoch 1/1, Step 2480, dis_loss: 1.37154, gen_loss 0.81472
Epoch 1/1, Step 2500, dis_loss: 1.37476, gen_loss 0.803486
Epoch 1/1, Step 2520, dis_loss: 1.35119, gen_loss 0.837654
Epoch 1/1, Step 2540, dis_loss: 1.37084, gen_loss 0.866499
Epoch 1/1, Step 2560, dis_loss: 1.35819, gen_loss 0.878512
Epoch 1/1, Step 2580, dis_loss: 1.40748, gen_loss 0.84805
Epoch 1/1, Step 2600, dis_loss: 1.36751, gen_loss 0.975612
Epoch 1/1, Step 2620, dis_loss: 1.36416, gen_loss 0.77618
Epoch 1/1, Step 2640, dis_loss: 1.38219, gen_loss 0.864602
Epoch 1/1, Step 2660, dis_loss: 1.39362, gen_loss 0.739157
Epoch 1/1, Step 2680, dis_loss: 1.38867, gen_loss 0.841134
Epoch 1/1, Step 2700, dis_loss: 1.38797, gen_loss 0.956012
Epoch 1/1, Step 2720, dis_loss: 1.35181, gen_loss 0.942149
Epoch 1/1, Step 2740, dis_loss: 1.37615, gen_loss 0.814443
Epoch 1/1, Step 2760, dis_loss: 1.39159, gen_loss 0.786718
Epoch 1/1, Step 2780, dis_loss: 1.37378, gen_loss 0.830609
Epoch 1/1, Step 2800, dis_loss: 1.3827, gen_loss 0.818133
Epoch 1/1, Step 2820, dis_loss: 1.36748, gen_loss 0.882345
Epoch 1/1, Step 2840, dis_loss: 1.37818, gen_loss 0.696168
Epoch 1/1, Step 2860, dis_loss: 1.38118, gen_loss 0.799844
Epoch 1/1, Step 2880, dis_loss: 1.38261, gen_loss 0.732644
Epoch 1/1, Step 2900, dis_loss: 1.36368, gen_loss 0.930605
Epoch 1/1, Step 2920, dis_loss: 1.36116, gen_loss 0.712034
Epoch 1/1, Step 2940, dis_loss: 1.37364, gen_loss 0.858962
Epoch 1/1, Step 2960, dis_loss: 1.3651, gen_loss 0.697994
Epoch 1/1, Step 2980, dis_loss: 1.35479, gen_loss 0.734383
Epoch 1/1, Step 3000, dis_loss: 1.35225, gen_loss 0.801796
Epoch 1/1, Step 3020, dis_loss: 1.37022, gen_loss 0.953216
Epoch 1/1, Step 3040, dis_loss: 1.36986, gen_loss 0.903649
Epoch 1/1, Step 3060, dis_loss: 1.37117, gen_loss 0.77324
Epoch 1/1, Step 3080, dis_loss: 1.37399, gen_loss 1.09503
Epoch 1/1, Step 3100, dis_loss: 1.36635, gen_loss 0.832554
Epoch 1/1, Step 3120, dis_loss: 1.39691, gen_loss 0.764109
Epoch 1/1, Step 3140, dis_loss: 1.3757, gen_loss 0.812589
Epoch 1/1, Step 3160, dis_loss: 1.39335, gen_loss 0.720933
Epoch 1/1, Step 3180, dis_loss: 1.38296, gen_loss 0.931593
Epoch 1/1, Step 3200, dis_loss: 1.38292, gen_loss 0.83806
Epoch 1/1, Step 3220, dis_loss: 1.3797, gen_loss 0.788011
Epoch 1/1, Step 3240, dis_loss: 1.34959, gen_loss 0.812513
Epoch 1/1, Step 3260, dis_loss: 1.37217, gen_loss 0.87604
Epoch 1/1, Step 3280, dis_loss: 1.38988, gen_loss 0.735318
Epoch 1/1, Step 3300, dis_loss: 1.36761, gen_loss 0.816881
Epoch 1/1, Step 3320, dis_loss: 1.37201, gen_loss 0.689435
Epoch 1/1, Step 3340, dis_loss: 1.37005, gen_loss 0.853752
Epoch 1/1, Step 3360, dis_loss: 1.36881, gen_loss 0.873415
Epoch 1/1, Step 3380, dis_loss: 1.39913, gen_loss 0.786426
Epoch 1/1, Step 3400, dis_loss: 1.36694, gen_loss 0.880744
Epoch 1/1, Step 3420, dis_loss: 1.36824, gen_loss 0.917498
Epoch 1/1, Step 3440, dis_loss: 1.38527, gen_loss 0.715152
Epoch 1/1, Step 3460, dis_loss: 1.3951, gen_loss 0.887699
Epoch 1/1, Step 3480, dis_loss: 1.3955, gen_loss 0.702601
Epoch 1/1, Step 3500, dis_loss: 1.36862, gen_loss 0.837935
Epoch 1/1, Step 3520, dis_loss: 1.37925, gen_loss 0.922106
Epoch 1/1, Step 3540, dis_loss: 1.36375, gen_loss 0.742072
Epoch 1/1, Step 3560, dis_loss: 1.37744, gen_loss 0.775208
Epoch 1/1, Step 3580, dis_loss: 1.37948, gen_loss 0.756886
Epoch 1/1, Step 3600, dis_loss: 1.37944, gen_loss 0.82769
Epoch 1/1, Step 3620, dis_loss: 1.37054, gen_loss 0.965508
Epoch 1/1, Step 3640, dis_loss: 1.3784, gen_loss 0.845236
Epoch 1/1, Step 3660, dis_loss: 1.37346, gen_loss 1.00726
Epoch 1/1, Step 3680, dis_loss: 1.37164, gen_loss 0.906355
Epoch 1/1, Step 3700, dis_loss: 1.37023, gen_loss 0.876013
Epoch 1/1, Step 3720, dis_loss: 1.37963, gen_loss 0.893971
Epoch 1/1, Step 3740, dis_loss: 1.37785, gen_loss 0.85228
Epoch 1/1, Step 3760, dis_loss: 1.36683, gen_loss 0.894613
Epoch 1/1, Step 3780, dis_loss: 1.38444, gen_loss 0.985358
Epoch 1/1, Step 3800, dis_loss: 1.34984, gen_loss 0.842943
Epoch 1/1, Step 3820, dis_loss: 1.36079, gen_loss 0.782132
Epoch 1/1, Step 3840, dis_loss: 1.37938, gen_loss 0.849221
Epoch 1/1, Step 3860, dis_loss: 1.35845, gen_loss 0.965296
Epoch 1/1, Step 3880, dis_loss: 1.3682, gen_loss 0.945182
Epoch 1/1, Step 3900, dis_loss: 1.37356, gen_loss 0.818303
Epoch 1/1, Step 3920, dis_loss: 1.37562, gen_loss 0.7519
Epoch 1/1, Step 3940, dis_loss: 1.3872, gen_loss 0.888105
Epoch 1/1, Step 3960, dis_loss: 1.38988, gen_loss 0.749747
Epoch 1/1, Step 3980, dis_loss: 1.37966, gen_loss 0.697912
Epoch 1/1, Step 4000, dis_loss: 1.38206, gen_loss 0.955298
Epoch 1/1, Step 4020, dis_loss: 1.37146, gen_loss 0.697519
Epoch 1/1, Step 4040, dis_loss: 1.38429, gen_loss 0.863408
Epoch 1/1, Step 4060, dis_loss: 1.37604, gen_loss 1.08884
Epoch 1/1, Step 4080, dis_loss: 1.36565, gen_loss 0.851488
Epoch 1/1, Step 4100, dis_loss: 1.36644, gen_loss 0.790593
Epoch 1/1, Step 4120, dis_loss: 1.36771, gen_loss 0.93738
Epoch 1/1, Step 4140, dis_loss: 1.38954, gen_loss 0.81753
Epoch 1/1, Step 4160, dis_loss: 1.36664, gen_loss 0.758972
Epoch 1/1, Step 4180, dis_loss: 1.39789, gen_loss 0.810989
Epoch 1/1, Step 4200, dis_loss: 1.3506, gen_loss 0.872706
Epoch 1/1, Step 4220, dis_loss: 1.33498, gen_loss 0.785685
Epoch 1/1, Step 4240, dis_loss: 1.36401, gen_loss 0.916911
Epoch 1/1, Step 4260, dis_loss: 1.36758, gen_loss 0.910085
Epoch 1/1, Step 4280, dis_loss: 1.38068, gen_loss 0.746977
Epoch 1/1, Step 4300, dis_loss: 1.37522, gen_loss 0.965225
Epoch 1/1, Step 4320, dis_loss: 1.35959, gen_loss 0.868422
Epoch 1/1, Step 4340, dis_loss: 1.3706, gen_loss 0.817077
Epoch 1/1, Step 4360, dis_loss: 1.36531, gen_loss 0.79615
Epoch 1/1, Step 4380, dis_loss: 1.38057, gen_loss 0.732446
Epoch 1/1, Step 4400, dis_loss: 1.37415, gen_loss 0.932025
Epoch 1/1, Step 4420, dis_loss: 1.37139, gen_loss 0.705685
Epoch 1/1, Step 4440, dis_loss: 1.36895, gen_loss 0.84068
Epoch 1/1, Step 4460, dis_loss: 1.35185, gen_loss 0.759476
Epoch 1/1, Step 4480, dis_loss: 1.37756, gen_loss 0.782371
Epoch 1/1, Step 4500, dis_loss: 1.38736, gen_loss 0.865186
Epoch 1/1, Step 4520, dis_loss: 1.37558, gen_loss 0.80518
Epoch 1/1, Step 4540, dis_loss: 1.36944, gen_loss 0.806795
Epoch 1/1, Step 4560, dis_loss: 1.3499, gen_loss 0.815502
Epoch 1/1, Step 4580, dis_loss: 1.37633, gen_loss 0.8417
Epoch 1/1, Step 4600, dis_loss: 1.38678, gen_loss 0.774124
Epoch 1/1, Step 4620, dis_loss: 1.36849, gen_loss 0.79674
Epoch 1/1, Step 4640, dis_loss: 1.3657, gen_loss 0.8911
Epoch 1/1, Step 4660, dis_loss: 1.37632, gen_loss 0.848608
Epoch 1/1, Step 4680, dis_loss: 1.3657, gen_loss 0.751688
Epoch 1/1, Step 4700, dis_loss: 1.3665, gen_loss 0.716798
Epoch 1/1, Step 4720, dis_loss: 1.38145, gen_loss 0.822616
Epoch 1/1, Step 4740, dis_loss: 1.37258, gen_loss 0.859619
Epoch 1/1, Step 4760, dis_loss: 1.37589, gen_loss 0.821249
Epoch 1/1, Step 4780, dis_loss: 1.3796, gen_loss 0.893526
Epoch 1/1, Step 4800, dis_loss: 1.37506, gen_loss 0.748504
Epoch 1/1, Step 4820, dis_loss: 1.36842, gen_loss 0.723064
Epoch 1/1, Step 4840, dis_loss: 1.3768, gen_loss 0.738048
Epoch 1/1, Step 4860, dis_loss: 1.38553, gen_loss 0.85435
Epoch 1/1, Step 4880, dis_loss: 1.36805, gen_loss 0.872317
Epoch 1/1, Step 4900, dis_loss: 1.37831, gen_loss 0.829882
Epoch 1/1, Step 4920, dis_loss: 1.38343, gen_loss 0.808663
Epoch 1/1, Step 4940, dis_loss: 1.38267, gen_loss 0.685
Epoch 1/1, Step 4960, dis_loss: 1.34831, gen_loss 0.666097
Epoch 1/1, Step 4980, dis_loss: 1.36832, gen_loss 1.06643
Epoch 1/1, Step 5000, dis_loss: 1.3757, gen_loss 0.746071
Epoch 1/1, Step 5020, dis_loss: 1.36055, gen_loss 0.829754
Epoch 1/1, Step 5040, dis_loss: 1.38887, gen_loss 0.915817
Epoch 1/1, Step 5060, dis_loss: 1.3841, gen_loss 0.729448
Epoch 1/1, Step 5080, dis_loss: 1.37848, gen_loss 0.745564
Epoch 1/1, Step 5100, dis_loss: 1.36337, gen_loss 0.728236
Epoch 1/1, Step 5120, dis_loss: 1.38076, gen_loss 0.776006
Epoch 1/1, Step 5140, dis_loss: 1.36752, gen_loss 0.86361
Epoch 1/1, Step 5160, dis_loss: 1.35879, gen_loss 0.789813
Epoch 1/1, Step 5180, dis_loss: 1.36736, gen_loss 0.974454
Epoch 1/1, Step 5200, dis_loss: 1.39476, gen_loss 0.86811
Epoch 1/1, Step 5220, dis_loss: 1.37784, gen_loss 0.714673
Epoch 1/1, Step 5240, dis_loss: 1.36971, gen_loss 0.871084
Epoch 1/1, Step 5260, dis_loss: 1.37881, gen_loss 0.815113
Epoch 1/1, Step 5280, dis_loss: 1.37715, gen_loss 0.829093
Epoch 1/1, Step 5300, dis_loss: 1.36566, gen_loss 0.870127
Epoch 1/1, Step 5320, dis_loss: 1.37608, gen_loss 0.876547
Epoch 1/1, Step 5340, dis_loss: 1.37821, gen_loss 0.85724
Epoch 1/1, Step 5360, dis_loss: 1.36795, gen_loss 0.819375
Epoch 1/1, Step 5380, dis_loss: 1.36623, gen_loss 0.829165
Epoch 1/1, Step 5400, dis_loss: 1.37988, gen_loss 0.782678
Epoch 1/1, Step 5420, dis_loss: 1.35097, gen_loss 1.00999
Epoch 1/1, Step 5440, dis_loss: 1.36558, gen_loss 0.901935
Epoch 1/1, Step 5460, dis_loss: 1.35886, gen_loss 0.795903
Epoch 1/1, Step 5480, dis_loss: 1.3787, gen_loss 0.821199
Epoch 1/1, Step 5500, dis_loss: 1.35463, gen_loss 0.710564
Epoch 1/1, Step 5520, dis_loss: 1.36005, gen_loss 0.894675
Epoch 1/1, Step 5540, dis_loss: 1.36328, gen_loss 0.776536
Epoch 1/1, Step 5560, dis_loss: 1.36542, gen_loss 0.856343
Epoch 1/1, Step 5580, dis_loss: 1.36492, gen_loss 0.81137
Epoch 1/1, Step 5600, dis_loss: 1.37178, gen_loss 0.791618
Epoch 1/1, Step 5620, dis_loss: 1.37869, gen_loss 0.846036
Epoch 1/1, Step 5640, dis_loss: 1.37812, gen_loss 0.752483
Epoch 1/1, Step 5660, dis_loss: 1.37169, gen_loss 0.854918
Epoch 1/1, Step 5680, dis_loss: 1.36588, gen_loss 0.84003
Epoch 1/1, Step 5700, dis_loss: 1.3745, gen_loss 0.849361
Epoch 1/1, Step 5720, dis_loss: 1.36596, gen_loss 0.794906
Epoch 1/1, Step 5740, dis_loss: 1.3735, gen_loss 0.811158
Epoch 1/1, Step 5760, dis_loss: 1.37317, gen_loss 0.894069
Epoch 1/1, Step 5780, dis_loss: 1.37154, gen_loss 0.87143
Epoch 1/1, Step 5800, dis_loss: 1.37374, gen_loss 0.835871
Epoch 1/1, Step 5820, dis_loss: 1.37528, gen_loss 0.908139
Epoch 1/1, Step 5840, dis_loss: 1.39387, gen_loss 0.711497
Epoch 1/1, Step 5860, dis_loss: 1.37243, gen_loss 0.836208
Epoch 1/1, Step 5880, dis_loss: 1.3628, gen_loss 0.818211
Epoch 1/1, Step 5900, dis_loss: 1.37229, gen_loss 0.731549
Epoch 1/1, Step 5920, dis_loss: 1.37139, gen_loss 0.78134
Epoch 1/1, Step 5940, dis_loss: 1.3777, gen_loss 0.820547
Epoch 1/1, Step 5960, dis_loss: 1.36841, gen_loss 0.678506
Epoch 1/1, Step 5980, dis_loss: 1.36037, gen_loss 0.7418
Epoch 1/1, Step 6000, dis_loss: 1.37791, gen_loss 0.746743
Epoch 1/1, Step 6020, dis_loss: 1.3677, gen_loss 0.848141
Epoch 1/1, Step 6040, dis_loss: 1.37056, gen_loss 0.850274
Epoch 1/1, Step 6060, dis_loss: 1.37814, gen_loss 0.816901
Epoch 1/1, Step 6080, dis_loss: 1.36717, gen_loss 0.759532
Epoch 1/1, Step 6100, dis_loss: 1.37281, gen_loss 0.820155
Epoch 1/1, Step 6120, dis_loss: 1.39086, gen_loss 0.793975
Epoch 1/1, Step 6140, dis_loss: 1.3757, gen_loss 0.821894
Epoch 1/1, Step 6160, dis_loss: 1.36536, gen_loss 0.842251
Epoch 1/1, Step 6180, dis_loss: 1.37956, gen_loss 0.878349
Epoch 1/1, Step 6200, dis_loss: 1.36016, gen_loss 0.808642
Epoch 1/1, Step 6220, dis_loss: 1.37621, gen_loss 0.818514
Epoch 1/1, Step 6240, dis_loss: 1.37522, gen_loss 0.816714
Epoch 1/1, Step 6260, dis_loss: 1.36623, gen_loss 0.932587
Epoch 1/1, Step 6280, dis_loss: 1.37423, gen_loss 0.786051
Epoch 1/1, Step 6300, dis_loss: 1.35519, gen_loss 0.717955
Epoch 1/1, Step 6320, dis_loss: 1.38335, gen_loss 0.794219

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
